#oracle data pump export example
Explore tagged Tumblr posts
Text
Export from 11g and import to 19c
In this article, we are going to learn how to export from 11g and import to 19c. As you know, Oracle Data Pump is a very powerful tool, and using the Export and Import method we can move data from a lower Oracle Database version to a higher one and vice versa. Step 1. Create a backup directory. To perform the export/import, first we need to create a backup directory at…
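The post excerpt stops at the directory step; the sketch below outlines the overall flow it describes. The directory path, schema name and file names are placeholders for illustration, not values taken from the original post.

# On the 11g source database, create the directory object as a DBA and grant access:
SQL> CREATE DIRECTORY backup_dir AS '/u01/backup';
SQL> GRANT READ, WRITE ON DIRECTORY backup_dir TO scott;
# Export the schema from the 11g source:
$ expdp scott/password directory=backup_dir dumpfile=scott_11g.dmp logfile=scott_exp.log schemas=scott
# Copy the dump file to the 19c server, create the same directory object there, then import:
$ impdp scott/password directory=backup_dir dumpfile=scott_11g.dmp logfile=scott_imp.log schemas=scott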
View On WordPress
#expdp/impdp from 12c to 19c#export and import in oracle 19c with examples#export from 11g and import to 19c#Export from 11g and import to 19c example#Export from 11g and import to 19c oracle#export from 11g and import to 19c with examples#oracle 19c export/import#oracle data pump export example#oracle database migration from 11g to 19c using data pump
0 notes
Text
Freeplane cd cover

Direct upgrade from 8.1.7.4, 9.0.1.4 or higher, 9.2.0.4 or higher, and 10.1.0.2 or higher to the newest Oracle Database 10g release is supported. However, you must first apply the specified minimum patch release indicated in the Current Release column. To upgrade to the new Oracle Database 10g release, follow the instructions in Chapter 3, "Upgrading to the New Oracle Database 10g Release". Otherwise, upgrade to an intermediate Oracle Database release before you can upgrade to the new Oracle Database 10g release, as follows: 7.3.3 (or lower) -> 7.3.4 -> 8.1.7.4 -> 10.2. When upgrading to an intermediate Oracle Database release, follow the instructions in the intermediate release's documentation. Then, upgrade the intermediate release database to the new Oracle Database 10g release using the instructions in Chapter 3, "Upgrading to the New Oracle Database 10g Release".

Unlike the DBUA or a manual upgrade, the Export/Import utilities physically copy data from your current database to a new database. You can use either the Oracle Data Pump Export and Import utilities (available as of Oracle Database 10g) or the original Export and Import utilities to perform a full or partial export from your database, followed by a full or partial import into a new Oracle Database 10g database. Export/Import can copy a subset of the data in a database, leaving the database unchanged. The current database's Export utility copies specified parts of the database into an export dump file. Then, the Import utility of the new Oracle Database 10g release loads the exported data into a new database. However, the new Oracle Database 10g database must already exist before the export dump file can be copied into it. When importing data from an earlier release, the Oracle Database 10g Import utility makes appropriate changes to data definitions as it reads earlier releases' export dump files.

Export/Import Effects on Upgraded Databases

The following sections highlight aspects of Export/Import that may help you to decide whether to use Export/Import to upgrade your database. The Export/Import upgrade method does not change the current database, which enables the database to remain available throughout the upgrade process. However, if a consistent snapshot of the database is required (for data integrity or other purposes), then the database must run in restricted mode or must otherwise be protected from changes during the export procedure. Because the current database can remain available, you can, for example, keep an existing production database running while the new Oracle Database 10g database is being built at the same time by Export/Import.
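For illustration, a minimal full Export/Import of the kind described above might look like the sketch below. The connect strings and file names are placeholders, and a real upgrade should follow the detailed steps in the upgrade guide rather than this outline.

# Full export from the current (source) database using the original Export utility:
$ exp system/password FULL=y FILE=full_db.dmp LOG=full_exp.log
# Full import into the new, already-created Oracle Database 10g database:
$ imp system/password FULL=y FILE=full_db.dmp LOG=full_imp.log
# Or, on a 10g or later source, the Data Pump equivalents using a directory object on each side:
$ expdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_expdp.log
$ impdp system/password FULL=y DIRECTORY=dp_dir DUMPFILE=full_db.dmp LOGFILE=full_impdp.log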

0 notes
Text
Automating File Transfers to Amazon RDS for Oracle databases
Many integrated Oracle applications use external files as input. Oracle databases access such files via a logical object called a database directory. Apart from accessing the application files, Oracle databases also use database directories to access data pump backups, external tables, reading logs, and more. In the traditional on-premises client-server architecture, the database administrator has to transfer the files to be processed from one server to another, log in to the database server to create an Oracle database directory object, and run the aforementioned tools. With Amazon RDS for Oracle, some of those tasks are abstracted, as we show throughout this post. Amazon RDS for Oracle gives you the benefits of a managed service solution that makes it easy to set up, operate, and scale Oracle deployments in the AWS Cloud. Amazon RDS for Oracle allows you to access files via database directory objects and native tools in the same ways you can access your on-premises Oracle databases. The main difference between Amazon RDS for Oracle and on-premises Oracle deployments is that Amazon RDS for Oracle is a managed service, therefore access to the underlying host is restricted in order to offer a fully managed service. Because you can’t access the underlying operating system for your database in Amazon RDS for Oracle, to automate large numbers of file uploads, we need to build a solution using Amazon Simple Storage Service (Amazon S3) and AWS Lambda to load files into Amazon RDS for Oracle storage. If the number or size of the files to be transferred to your Amazon RDS for Oracle database is small or infrequent, you can manually move the files to Amazon S3, download the files from Amazon S3 to the Amazon RDS for Oracle database, and finally load or process the files in the database. However, when your business logic requires continual loading and processing of many files, automating this process allows IT organizations to spend their time on other tasks that bring more value to the company. The purpose of this post is to demonstrate how you can use Amazon S3 and Lambda to automate file transfers from a host (on-premises or cloud-based) to an object database directory inside an Amazon RDS for Oracle database local storage. Solution overview This solution demonstrates the automation of file transfers from on premises to Amazon RDS for Oracle databases by using Amazon S3, Lambda, and AWS Secrets Manager. After the files have been uploaded to S3 buckets, an S3 event triggers a Lambda function responsible for retrieving the Amazon RDS for Oracle database credentials from Secrets Manager and copying the files to the Amazon RDS for Oracle database local storage. The following diagram shows this workflow. The implementation of this solution consists of the following tasks: Create an S3 bucket for file uploads to Amazon RDS for Oracle database local storage. Create a Secrets Manager secret for retrieving credentials required to connect to the Amazon RDS for Oracle database. Create AWS Identity and Access Management (IAM) policies and roles required by the solution to interact with Amazon RDS for Oracle, Secrets Manager, Lambda, and Amazon S3. Create a Lambda function for the automation of the file transfers from Amazon S3 to Amazon RDS for Oracle local storage. Configure S3 events to invoke the function on new file uploads. Validate the solution. 
Prerequisites This post assumes that you can load files directly into Amazon S3 from the host where files are stored, and that you have provisioned an Amazon RDS for Oracle database with Amazon S3 integration. For detailed steps on how to perform this task, see Integrating Amazon RDS for Oracle with Amazon S3 using S3_integration. This process also assumes the following AWS resources have already been provisioned inside your AWS account: A Linux-based workstation to perform deployments, cloud or on-premises Python 3.6 or 3.7 installed on the workstation used to create the AWS services The AWS Command Line Interface (AWS CLI) installed and configured on the workstation used to create the AWS services The Lambda function must be created in private subnets Connectivity from private subnets to Secrets Manager using NAT Gateway or a VPC endpoint for the Lambda function to retrieve secrets RDS for Oracle database user with the following privileges: CREATE SESSION SELECT_CATALOG_ROLE EXECUTE on rdsadmin.rdsadmin_s3_tasks EXECUTE on rdsadmin.rds_file_util EXECUTE on rdsadmin.rdsadmin_util Creating the S3 bucket We need to create an S3 bucket or repurpose an existing bucket for file uploads to Amazon RDS. If you want to create a new bucket, use the following instructions: Log in to the Linux workstation where Python and the AWS CLI are installed, using the appropriate credentials via SSH or the terminal emulator of your choice. For example: ssh -i my-creds.pem ec2-user@myLinuxWorkstation Use a unique bucket name such as s3-int-bucket-yyyymmdd-hhmiss in your chosen Region: export myAWSRegion=us-east-1 export myS3Bucket=s3-int-bucket-20201119-184334 aws s3 mb s3://$myS3Bucket --region $myAWSRegion Create a folder under the newly created bucket called incoming-files: aws s3api put-object --bucket $myS3Bucket --key incoming-files/ --region $myAWSRegion Creating Secrets Manager secrets The Lambda function needs a Secrets Manager secret in order to retrieve database credentials to access the Oracle databases securely. The following steps show how to create a new secret for your databases: Create a JSON document containing the information to be stored in the secret, using the following template: db-secrets.json: { "username": "RDS_User_For_S3_Transfer", "password": "XXXX", "engine": "oracle", "host": "FQDN_Of_RDS_EndPoint", "port": 1521, "dbname": "Name_Of_RDS_Database", "dbInstanceIdentifier": "Name_Of_RDS_Instance", "dbtype": "RDS" } Obtain the values for each key pair using the following command: $ aws rds describe-db-instances --db-instance-identifier oracle19 --query "DBInstances[0].[MasterUsername,Engine,Endpoint,DBName,DBInstanceIdentifier]" --region $myAWSRegion [ "oracle", "oracle-ee", { "Address": "oracle19.aaaaaaaaaaaa.us-east-1.rds.amazonaws.com", "Port": 1521, "HostedZoneId": "Z2R2ITUGPM61AM" }, "ORACLE19", "oracle19" ] With the information retrieved from the AWS CLI, we can populate the template: db-secrets.json: { "username": "s3trfadmin", "password": "MyPasswordGoesHere1234!", "engine": "oracle", "host": "oracle19.aaaaaaaaaaaa.us-east-1.rds.amazonaws.com", "port": 1521, "dbname": "ORACLE19", "dbInstanceIdentifier": "oracle19", "dbtype": "RDS" } Note that for the engine value pair, we used oracle instead of oracle-ee. We use the JSON document to create the Secrets Manager secret. For simplicity, we match the name of the secret to our database’s name (oracle19). 
See the following code: myRDSDbName=oracle19 myAWSSecret=oracle19 aws secretsmanager create-secret --name ${myAWSSecret} --secret-string file://db-secrets.json --region $myAWSRegion Retrieve the Amazon Resource Name (ARN) for the Secrets Manager secret to use in later steps: aws secretsmanager describe-secret --secret-id $myAWSSecret --query "ARN" --output text --region $myAWSRegion arn:aws:secretsmanager:us-east-1:123123123123:secret:oratrg19-NW8BK1 Creating IAM policies For this post, we create the IAM policy SecretsManagerReadOnly for the Lambda function to use. Use the ARN for the Secrets Manager secret to create a file containing the policy granting permissions on Secrets Manager: secrets-manager-readonly-policy.json: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetRandomPassword", "secretsmanager:GetResourcePolicy", "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret", "secretsmanager:ListSecretVersionIds", "secretsmanager:ListSecrets" ], "Resource": "arn:aws:secretsmanager:us-east-1:123123123123:secret:oratrg19-NW8BK1" } ] } Create a policy using the policy document: aws iam create-policy --policy-name SecretsManagerReadOnly --policy-document file://secrets-manager-readonly-policy.json --region $myAWSRegion Verify that the policy was created correctly using the following command: aws iam list-policies | grep '"SecretsManagerReadOnly"' Creating IAM roles Our Lambda function uses the role PythonRDSForLambda. To create the role, follow these steps: Create a file containing the appropriate service trust policy, which associates the new role with a specific AWS service: lambda-trust.json: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Create a role using the trust policy document: aws iam create-role --role-name PythonRDSForLambda --assume-role-policy-document file://lambda-trust.json Verify the role was created: aws iam list-roles | grep '"PythonRDSForLambda"' Obtain the AWS account number to use in the next steps: myAWSAcct=$(aws iam list-roles --query 'Roles[*].[Arn]' --output text | grep 'PythonRDSForLambda$' | cut -d: -f5) Attach policies to the role: aws iam attach-role-policy --role-name PythonRDSForLambda --policy-arn "arn:aws:iam::${myAWSAcct}:policy/SecretsManagerReadOnly" aws iam attach-role-policy --role-name PythonRDSForLambda --policy-arn "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess" aws iam attach-role-policy --role-name PythonRDSForLambda --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole" aws iam attach-role-policy --role-name PythonRDSForLambda --policy-arn "arn:aws:iam::aws:policy/service-role/AWSLambdaENIManagementAccess" Replace the string ${myAWSAcct} with your AWS account number if you’re not running the commands from a UNIX or Linux shell. The preceding code attaches the following policies: SecretsManagerReadOnly AmazonS3ReadOnlyAccess AWSLambdaVPCAccessExecutionRole AWSLambdaENIManagementAccess Creating the Lambda function Lambda is a serverless compute service that allows you to run code without having to provision servers, implement complex workload-aware cluster scaling logic, maintain integration with other services, or manage runtimes. Lambda natively supports many popular programming languages, such as Java, Python, Node.js, Go, PowerShell, C#, and Ruby. It also supports other programming languages via a Runtime API. 
With Lambda, you can run code for virtually any type of application or backend service—all with zero administration. Just upload your code as a ZIP file or container image, and Lambda automatically and precisely allocates compute power and runs your code based on the incoming request or event for any scale of traffic. For this post, our function is responsible for automatically transferring files from the Amazon S3 bucket to the RDS for Oracle instance. To create the function, we require custom Python libraries and Oracle libraries that are packaged alongside the Python code to be implemented for this solution. Complete the following steps: Make a note of the Python version installed on the machine where you create the package to deploy the Lambda function: pythonVersion="python$(python3 --version|cut -c8-10)" Create a custom directory where your code and libraries reside. For this post, the new directory is created under the user’s home directory: cd /home/ec2-user/ mkdir -p s3ToOracleDir Log in to Oracle OTN using your Oracle credentials and download the latest Oracle Instant Client libraries into the work directory /home/ec2-user/s3ToOracleDir. Uncompress the downloaded files and delete them from the current directory: cd /home/ec2-user/s3ToOracleDir unzip -oqq "oraInstantClient*.zip" -d . rm -f oraInstantClientBasic19_6.zip oraInstantClientSDK19_6.zip Delete Oracle Instant Client libraries not required by the Lambda function to reduce the size of the deployment package: cd /home/ec2-user/s3ToOracleDir rm instantclient_19_6/lib*.so.1[!9]* Move the remaining files from the instanclient_19_6 directory to the current directory and delete the instantclient_19_6 directory and ZIP files downloaded for the installation: cd /home/ec2-user/s3ToOracleDir mv instantclient_19_6/* . rmdir instantclient_19_6 Install the cx-Oracle and lxml Python modules required by the Lambda function to interact with the RDS for Oracle DB instance: cd /home/ec2-user/s3ToOracleDir pip3 install -t . cx-Oracle pip3 install -t . lxml Install the libaio library and copy it to the current directory: cd /home/ec2-user/s3ToOracleDir sudo yum install libaio -y cp /usr/lib64/libaio.so.1 . Create a Python script called s3ToOracleDir.py under the /tmp directory using the following code. The sample code is for demonstration purposes only and for simplicity does not provide data encryption in transit. We recommend that your final implementation incorporates your organization’s security policies and AWS Security Best Practices. 
#Lambda function to transfer files uploaded into S3 bucket using RDS for Oracle S3 Integration import cx_Oracle import boto3 import sys import os from urllib.parse import unquote_plus import json # Variable definitions jSecret = {} flag = False s3Bucket = os.environ['S3BUCKET'] rdsDirectory = os.environ['RDSDIRECTORY'] regName = os.environ['AWSREGION'] secretName = os.environ['SECRETNAME'] print(f'Environment Variablesnn') print(f'AWS Region: {regName}') print(f'AWS Secret Alias: {secretName}') print(f'Amazon S3 Bucket: {s3Bucket}') print(f'AWS RDS Database Directory: {rdsDirectory}') # Initializing AWS resources print('Initializing AWS S3 - Resource') s3 = boto3.resource('s3') print('Initializing AWS S3 - Session') session = boto3.session.Session() print('Initializing AWS Secrets Manager - Client') client = session.client( service_name='secretsmanager', region_name=regName ) print(f'Retrieving secret ({secretName}) from AWS Secrets Manager') secValResp = client.get_secret_value(SecretId=secretName) if None != secValResp.get('SecretString'): jSecret = json.loads(secValResp['SecretString']) else: decoded_binary_secret = base64.b64decode(secValResp['SecretBinary']) jSecret = json.loads(decoded_binary_secret) dsnTnsRds = cx_Oracle.makedsn(jSecret['host'],jSecret['port'],service_name=jSecret['dbname']) print(f'Database Connection String: {dsnTnsRds}') connRds = cx_Oracle.connect(user=jSecret['username'], password=jSecret['password'], dsn=dsnTnsRds) print(f'Target Database Version: {connRds.version}') # When creating the Lambda function, ensure the following setting for LD_LIBRARY_PATH # LD_LIBRARY_PATH = /var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib:/opt/python def lambda_handler(event, context): try: c = connRds.cursor() c.arraysize = 500 commands = [] if 0 < len(event.get('Records','')): print("Beginning file transfers from AWS S3 to AWS RDS for Oracle") # Process each file loaded in S3 bucket for record in event['Records']: bucket = record['s3']['bucket']['name'] fileName = unquote_plus(record['s3']['object']['key']) print(f"Transferring file s3://{bucket}/{fileName} to Oracle directory {rdsDirectory}") sql = "SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3(" sql += "p_bucket_name => '" + s3Bucket + "'," sql += "p_s3_prefix => '" + fileName + "'," sql += "p_directory_name => '" + rdsDirectory + "')" sql += " AS TASK_ID FROM DUAL" print(f"Running: {sql}") c.execute(sql) while True: rows = c.fetchmany(100) if not rows: break for row in rows: print(f"Output: {row[0]}") flag = True e = "Success" else: e = "No S3 events detected" print(f'There are no new files to process on {s3Bucket}') except: statusCode = 500 e = "Error Message: " + str(sys.exc_info()[1]) print(f'{e}') else: statusCode = 200 return { 'statusCode': statusCode, 'body': e } Copy the s3ToOracleDir.py Python script to the directory containing all the libraries and modules: cd /home/ec2-user/s3ToOracleDir cp /tmp/s3ToOracleDir.py . Package all the files, making sure it’s done from within your working directory. (Otherwise, the function can’t find the programs and libraries required for it to run. It’s important to use relative paths for this step to prevent runtime issues with the Lambda function.) See the following code: cd /home/ec2-user/s3ToOracleDir zip -q9r ../s3ToOracleDir.zip . 
Because the size of the resulting ZIP file is larger than 50 MB, you need to upload the file to Amazon S3, and from there, it can be deployed: aws s3api put-object --bucket $myS3Bucket --key lambda/ --region $myAWSRegion aws s3 cp ../s3ToOracleDir.zip s3://${myS3Bucket}/lambda/ After the package is uploaded, you can use it to create the function as follows: aws lambda create-function --function-name s3ToOracleDir --code S3Bucket=${myS3Bucket},S3Key=lambda/s3ToOracleDir.zip --handler s3ToOracleDir.lambda_handler --runtime ${pythonVersion} --role arn:aws:iam::${myAWSAcct}:role/PythonRDSForLambda --output text --region $myAWSRegion This next step sets the environment variables for Lambda to function properly. In this way, you can alter the behavior of the function without having to change the code. The Lambda environment variable name RDSDIRECTORY should match the name of the Oracle database directory that you use for file storage later on. Set the environment variables with the following code: myRDSDirectory=UPLOAD_DIR aws lambda update-function-configuration --function-name s3ToOracleDir --environment "Variables={LD_LIBRARY_PATH=/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib:/opt/python,S3BUCKET=$myS3Bucket,RDSDIRECTORY=$myRDSDirectory,AWSREGION=$myAWSRegion,SECRETNAME=$myAWSSecret}" --output text --region $myAWSRegion Obtain the subnet and security group IDs from the AWS Management Console or using the following AWS CLI commands: myRDSSubnets=$(aws rds describe-db-instances --db-instance-identifier ${myRDSDbName} --query "DBInstances[0].DBSubnetGroup.Subnets[*].SubnetIdentifier" --output text --region $myAWSRegion|sed "s/\t/,/g") myRDSSecGrps=$(aws rds describe-db-instances --db-instance-identifier ${myRDSDbName} --query "DBInstances[0].VpcSecurityGroups[*].VpcSecurityGroupId" --output text --region $myAWSRegion|sed "s/\t/,/g") Now we configure the networking and security attributes for the Lambda function for its correct interaction with the Amazon RDS for Oracle database. The function and database must be created in private subnets. Because the function interacts with Secrets Manager, you must enable outbound internet access via a NAT gateway or by creating a VPC endpoint for Secrets Manager. Configure the attributes with the following code: aws lambda update-function-configuration --function-name s3ToOracleDir --vpc-config SubnetIds=${myRDSSubnets},SecurityGroupIds=${myRDSSecGrps} --output text --region $myAWSRegion Creating an S3 event notification The final step is to associate the s3ToOracleDir Lambda function with the S3 bucket we created in the earlier steps. On the Amazon S3 console, choose the bucket you created (for this post, s3-int-bucket-20201119-184334). Choose the Properties tab. Scroll down to the Event notifications section and choose Create event notification. For Event name, enter a name (for this post, s3ToOracleDir). For Prefix, enter incoming-files/, which is the name of the directory we created in the S3 bucket previously. Make sure the prefix ends with the forward slash (/). For Suffix, enter a suffix associated with the file extension that triggers the Lambda function (for this post, .txt). In the Event types section, select All object create events. This selects the Put, Post, Copy, and Multipart upload completed events. For Destination, leave at the default Lambda function. For Specify Lambda function, leave at the default Choose from your Lambda functions. 
For Lambda function, choose the function we created (s3ToOracleDir). Choose Save changes. Creating an Oracle directory Next, we need to create an Oracle directory on the RDS for Oracle instance (if not already created) to be used for storing the files transferred to the database. The directory name must match the value set for the Lambda RDSDIRECTORY environment variable earlier in the process. To simplify operations, create two small SQL scripts containing the statements associated with creating the database directory and listing its contents, respectively: cd ~ #Script for creating the database directory echo "exec rdsadmin.rdsadmin_util.create_directory(p_directory_name => '$myRDSDirectory');" > createDirectory.sql #Script for listing the contents of the database directory echo "SELECT * FROM TABLE(rdsadmin.rds_file_util.listdir(p_directory => '$myRDSDirectory'));" > listDirectory.sql Connect to the database using any SQL client as a user who has execute privileges on the rdsadmin packages: sqlplus s3trfadmin@${myRDSDbName} Create the database directory using the createDirectory.sql script: SQL> @createDirectory.sql List the contents of the newly created directory using the listDirectory.sql script: SQL> @listDirectory.sql Validating the setup The final step is to test that the solution is working properly. Create a sample text file or use an existing file: ls -l /etc > test_upload.txt Transfer the file to the S3 bucket under the incoming-files folder: aws s3 cp test_upload.txt s3://${myS3Bucket}/incoming-files/ Wait a few seconds and list the contents of the Amazon RDS for Oracle directory: sqlplus s3trfadmin@${myRDSDbName} SQL> @listDirectory.sql You should be able to see your test file listed in the directory. Review the Amazon CloudWatch logs associated with the Lambda function to troubleshoot any issues encountered during implementation. Some of the most common issues are associated with an incorrect setup of the Amazon S3 integration for Amazon RDS and communication problems with Secrets Manager or the RDS for Oracle instance due to incorrectly configured routes or security groups. For more information, see Troubleshooting issues in Lambda. Conclusion This post describes how to integrate Amazon RDS for Oracle, Amazon S3, Secrets Manager, and Lambda to create a solution for automating file transfers from Amazon S3 to Amazon RDS for Oracle local storage. You can further enhance this solution to call other Oracle PL/SQL or Lambda functions to perform additional processing of the files. As always, AWS welcomes your feedback, so please leave any comments below. About the Authors Israel Oros is a Database Migration Consultant at AWS. He works with customers in their journey to the cloud with a focus on complex database migration programs. In his spare time, Israel enjoys traveling to new places with his wife and riding his bicycle whenever weather permits. Bhavesh Rathod is a Senior Database Consultant with the Professional Services team at Amazon Web Services. He works as a database migration specialist to help Amazon customers migrate their on-premises database environments to AWS cloud database solutions. https://aws.amazon.com/blogs/database/automating-file-transfers-to-amazon-rds-for-oracle-databases/
0 notes
Text
RMAN Compressed Backupset
RMAN compresses the backup set contents before writing them to disk. The good thing is that no extra uncompression step is required during recovery when we use RMAN compression. RMAN has three different types of compression techniques: Null Compression, Unused Block Compression and Binary Compression.
Null Compression: When backing up datafiles into backup sets, RMAN does not back up the contents of data blocks that have never been allocated. This means RMAN will never back up blocks that have never been used. For example: we have a tablespace with one datafile of size 200 MB, and out of 200 MB only 110 MB is used. Then RMAN will back up only 110 MB.
Unused Block Compression: RMAN skips the blocks that do not currently contain data, i.e. unused blocks. No extra action is required from the DBA to use this feature. Example: we have a tablespace (HRMS) with one datafile of size 100 MB, and out of 100 MB, 43 MB is used by the HRMS objects. Then a user dropped the PAY_EMPLOYEE_PERSONAL_INFO table of size 18 MB from the HRMS tablespace. With the Unused Block Compression technique only 25 MB of the file is backed up, whereas in the case of Null Compression 43 MB is backed up, because Null Compression considers the blocks that were ever used.
Binary Compression: Binary Compression can be done by specifying the "AS COMPRESSED" clause in the backup command. This option allows RMAN to perform binary compression, and the backups are automatically decompressed during recovery. This compression technique can greatly reduce the space required for disk backup storage.
RMAN> backup as compressed backupset database;
No special effort is needed to restore a database from compressed backup sets, but restoring from a compressed backup set takes more time than restoring from uncompressed backup sets.
To configure RMAN compression:
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
followed by …
RMAN> CONFIGURE COMPRESSION ALGORITHM ‘HIGH’; or RMAN> CONFIGURE COMPRESSION ALGORITHM ‘MEDIUM’; or RMAN> CONFIGURE COMPRESSION ALGORITHM ‘LOW’; or RMAN> CONFIGURE COMPRESSION ALGORITHM ‘BASIC’;
Oracle 11g added several compression algorithms to compress data. They can be used for compressing tables, LOBs, compressed Data Pump exports or even RMAN backups. But for some compression algorithms you need to purchase the "Advanced Compression Option". The compression levels are BASIC, LOW, MEDIUM and HIGH. The compression ratio generally increases from LOW to HIGH, with a trade-off of potentially consuming more CPU resources. If we have enabled the Oracle Database 11g Release 2 Advanced Compression Option, then we can choose from the following compression levels:
HIGH - Best suited for backups over slower networks where the limiting factor is network speed
MEDIUM - Recommended for most environments. Good combination of compression ratios and speed
LOW - Least impact on backup throughput and suited for environments where CPU resources are the limiting factor
SQL> Select * from V$RMAN_COMPRESSION_ALGORITHM;
Note: If you do not have the Advanced Compression license, BASIC compression will produce reasonable compression rates at moderate load.
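Putting the configuration above together, a short example run might look like the following; the tag name is arbitrary and the block is only a sketch:

RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE TAG 'COMP_FULL';
RMAN> LIST BACKUP SUMMARY;

LIST BACKUP SUMMARY then shows the compressed backup sets that were just created, which makes it easy to compare their sizes with earlier uncompressed backups.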
0 notes
Text
Oracle 12c new features?
Oracle Database 12c (c for cloud), a multitenant database management system, introduces many important new capabilities in many areas – database consolidation, query optimization, performance tuning, high availability, partitioning, backup and recovery.
Pluggable Databases: In Oracle 12c, in a pluggable database environment, we can create a single database container and plug multiple databases into this container. All these databases then share the exact same Oracle server/background processes and memory, unlike the previous versions where each database has its own background processes and shared memory. This helps in database consolidation and reduces the overhead of managing multiple disparate databases. Consolidation is an important business strategy to reduce the cost of infrastructure and operational expense. In many production database servers, a big portion of CPU cycles goes unused. By consolidating many databases into fewer database servers, both the hardware and operational staff can be more effectively utilized. Oracle's new pluggable database feature reduces the risk of consolidation because the DBA can easily plug or unplug an existing database to or from a container database. There is no need to change any code in the application. It is also easy to unplug a database and convert the pluggable database to a traditional database if required. In addition, you can back up and recover pluggable databases independently of the container database; you can also perform a point-in-time recovery of a pluggable database. Further, Resource Manager can be used to control resources consumed by a pluggable database.
Optimizer features: Oracle 12c introduces a few useful SQL Optimizer features, and most of these are automatically enabled. It is not uncommon for the Optimizer to choose an inefficient execution plan due to incorrect cardinality estimates, invalid statistics, or even stale statistics. This can have dire results. A SQL statement estimated to run for a few seconds might take hours to execute if the chosen execution plan is not optimal.
SQL: Identity columns, which are auto-incremented at the time of insertion:
SQL> create table emp (emp_id number generated as identity, emp_name varchar2(30));
SQL> create table emp (emp_id number generated as identity (start with 1 increment by 1 cache 20 noorder), emp_name varchar2(30));
Increased size limit for VARCHAR2, NVARCHAR2, and RAW datatypes to 32K (from 4K). DEFAULT ON NULL (a default value is inserted into the null column). Session-private statistics for GTTs (table and index statistics are held private for each session). UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace; global temporary tables will not generate UNDO, which reduces the contents of regular UNDO and allows better flashback operation. In Oracle 12c we are able to make a column invisible:
SQL> create table ss (column-name column-type invisible);
SQL> alter table ss1 modify column-name invisible;
SQL> alter table ss1 modify column-name visible;
In Oracle 12c, there is no need to shut down the database for changing the archive log mode. Data Pump now allows turning off redo for the import (only) operation. You can now create duplicate indexes using the same columns, in the same order, as an existing index. The TRUNCATE command is enhanced with a CASCADE option which allows child records to be truncated as well. Oracle 12c allows using DDL inside SQL statements (PL/SQL inside SQL). Moving and renaming a datafile is now ONLINE; there is no need to take the datafile offline.
PL/SQL: A role can now be granted to a code unit (PL/SQL Unit Security). Thus one can determine at a very fine grain who can access a specific unit of code. We can declare PL/SQL functions in the WITH clause of a select statement. MapReduce can be run from PL/SQL directly in the database. We can use Boolean values in dynamic PL/SQL; still no Booleans as database types.
ASM: Introduction of Flex ASM; with this feature, database instances use remote ASM instances. In normal conditions, if ASM fails on a node the entire node becomes useless, whereas in 12c the ability to get the extent map from a remote ASM instance keeps the node useful. Introduction of Flex Cluster, with a lightweight cluster stack: leaf nodes and traditional-stack hub nodes (the application layer is the typical example of leaf nodes), where leaf nodes don't require any network heartbeat. Oracle ASM disk scrubbing (checks for logical data corruptions and repairs them automatically).
RMAN: Recovery from human errors such as an accidental DROP TABLE, TRUNCATE TABLE, or a wrong script. RMAN TABLE Point-In-Time Recovery (a combination of Data Pump and RMAN; an auxiliary instance is required). Recover or copy files from standby databases. Restore and recover individual tables from an RMAN backup. Incremental recovery is faster, with many of the tasks removed; you can automate the use of incremental backups to bring the standby database in sync. Importing from older export files is possible.
Partitioning: Partitioning enhancements – multiple partition operations in a single DDL. Online move of a partition (without DBMS_REDEFINITION). Interval-Ref Partitions – we can create a ref partition (to relate several tables with the same partitions) as a sub-partition to the interval type. CASCADE for TRUNCATE and EXCHANGE partition. Asynchronous global index maintenance for DROP and TRUNCATE: the command returns instantly, but index cleanup happens later.
Patching: Centralized patching – we can test patches on database copies, rolling patches out centrally once testing is complete.
Compression: Automated compression with heat map. Optimization can be run on live databases with no disruption. Data optimization will monitor data usage and, driven by policy, archive old data, while hot data will be compressed for faster access. Inactive data can be more aggressively compressed or archived, greatly reducing storage costs.
Data Guard: 1. Oracle Database 12c introduces a new redo transport method (Fast Sync redo transport) in which the standby acknowledges the primary as soon as it receives the redo, without waiting for it to be written to disk on the standby. 2. A new type of redo destination ("Far Sync Standby"), composed only of a control file, standby redo logs and some disk space for archive logs, which receives redo from the primary and forwards it to the standby database; failover and switchover operations are totally transparent, as the "Far Sync Standby" cannot be used as the target. 3. Data Guard Broker commands have been extended; the "validate database" command checks whether the database is ready for role transition or not. 4. Data Guard Broker now supports cascaded standbys. 5. Global temporary tables can now be used on an Active Data Guard standby database.
New views/packages in Oracle 12c Release: dba_pdbs, v$pdbs, cdb_data_files, dbms_pdb, dbms_qopatch.
Using DBUA to upgrade an existing database is the simplest and quickest method. For step-by-step details follow the link below: Upgrade Oracle Database 11g to 12c
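As an illustration of the WITH-clause PL/SQL function mentioned above, a minimal sketch (the function and column names are invented for the example) looks like this:

WITH
  FUNCTION get_area(p_length NUMBER, p_width NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_length * p_width;
  END;
SELECT get_area(10, 5) AS area FROM dual
/

In SQL*Plus the statement has to be terminated with a slash because it contains PL/SQL.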
0 notes
Text
Data Pump Mode
Oracle DBA professional training is there for you to make your career in Oracle.
During the previous years we have seen three new releases of Oracle: 9i, 10g and 11g. There is a question often asked, and I ask it as well: which are the best or the most popular features of Oracle?
Real Application Clusters and Data Guard in 9i
Grid Control and Data Pump in 10g
Automatic SQL Tuning and Edition-Based Redefinition in 11g
There have been so many new, excellent and essential features, but let us say that they all serve the improvement of humanity and the latest growth of IT. Perhaps?
In 11gR2, Oracle chose to introduce Data Pump Legacy Mode in order to offer backward compatibility for scripts and parameter files used with the original export/import utilities. The documentation briefly says: “This feature enables users to continue using original Export and Import scripts with Data Pump Export and Import. Development time is reduced as new scripts do not have to be created.”
If you check AristaDBA’s Oracle Weblog and read the first three paragraphs you will probably see what it is about. I completely agree with everything written there.
And I don’t really get it: why do we have Data Pump Legacy Mode? Do Oracle customers badly need it? Was exp/imp so, so great that we must have it back? How about a RAC Legacy Mode if I want to use the gc_files_to_lock or freeze_db_for_fast_instance_recovery parameters? There really was a parameter called freeze_db_for_fast_instance_recovery, I am not making this up. Run this one:
SELECT kspponm,
DECODE(ksppoflg, 1,'Obsolete', 2, 'Underscored') as "Status"
FROM x$ksppo
WHERE kspponm like '%freeze%'
ORDER BY kspponm;
However, the Data Pump Legacy Mode feature exists, and once you use any original export/import parameter you put Data Pump into legacy mode. Just one parameter is enough. In Oracle terms, Data Pump enters legacy mode once it determines that a parameter unique to original Export or Import is present. Of course, some parameters, such as buffer, compress, object_consistent, recordlength, resumable, filesize, tts_owners, streams_configuration and streams_instantiation, are simply ignored.
Now, here is a paradox or simply a documentation error: deferred_segment_creation is set to TRUE by default in 11gR2. Have a look at the documentation:
SQL> create table TEST (c1 number, c2 varchar2(10), c3 date) storage (initial 5M);
Table created.
SQL> select bytes, blocks, segment_type, segment_name from dba_segments where segment_name='TEST';
no rows selected
C:\>expdp julian/password dumpfile=data_pump_dir:abc_%U.dat schemas=julian include=TABLE:in('TEST') logfile=abc.log buffer=1024
Export: Release 11.2.0.2.0 - Production on Wed May 11 07:53:45 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "buffer=1024" Location: Command Line, ignored.
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01": julian/******** dumpfile=data_pump_dir:abc_%U.dat schemas=julian
include=TABLE:in('TEST') logfile=abc.log reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "JULIAN"."TEST" 0 KB 0 rows
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
C:\ORACLE\ADMIN\JMD\DPDUMP\ABC_01.DAT
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 07:54:07
But it is not true that you cannot export tables with no segments. Here is the proof:
C:\>exp userid=julian/abc file=C:\Oracle\admin\JMD\dpdump\a.dmp tables=julian.test
Export: Release 11.2.0.2.0 - Production on Wed May 11 08:31:08 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table TEST 0 rows exported
Export terminated successfully without warnings.
But forget about this Legacy Mode. Do not use it. Pretend it does not exist.
Let us look now at some new popular features of Data Pump. Remember that in 11gR2 several limitations have already been removed:
– The limitation that in TABLES mode all tables had to reside in the same schema.
– The limitation that only one object (table or partition) could be specified if wildcards were used as part of the object name (see the sketch below).
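A small sketch of what these two relaxations allow in a single TABLES-mode job (the schema and table names are invented for illustration):

$ expdp system/password directory=dp_dir dumpfile=multi_schema.dmp tables=hr.employees,scott.emp%

Here tables from two different schemas are exported in one job, and the wildcard emp% can match more than one object.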
For RAC users: Data Pump worker processes can now be distributed across all Oracle RAC instances, a subset of Oracle RAC instances, or restricted to the instance where the Data Pump job starts. It is also now possible to start Data Pump jobs and run them on different Oracle RAC instances at the same time.
For XMLType column users: there is a new DISABLE_APPEND_HINT value for the DATA_OPTIONS parameter, which disables the APPEND hint while loading the data object.
For EBR users: specific editions can be exported and imported. Using the SOURCE_EDITION parameter on export and the TARGET_EDITION parameter on import, you can export a particular edition of the database and import into a particular edition of the database.
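A short sketch of the edition parameters mentioned above, with placeholder directory, schema and edition names:

$ expdp hr/password directory=dp_dir dumpfile=hr_edition.dmp schemas=hr source_edition=app_release_2
$ impdp hr/password directory=dp_dir dumpfile=hr_edition.dmp schemas=hr target_edition=app_release_3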
Data Pump is among the Oracle tools with the fewest bugs! Although it is challenging to say these days what is a bug, what is a feature and what is a duplicate enhancement. There are about 100 types of bugs.
Thus, our Oracle training course is always there for you to become a DBA professional.
#oracle course#sql training institutes in pune#oracle training#oracle careers#oracle dba certification#oracle dba course#oracle certification#oracle certification courses#oracle dba jobs#best oracle training
0 notes
Text
Export Backup Automation in Oracle On Linux
Export Backup Automation in Oracle On Linux #oracle #oracledba #oracledatabase
Hello friends, in this article we are going to learn how to schedule a database export backup automatically. Yes, it is possible to automate export backups with a crontab scheduler. How to schedule Export Backup: Using shell scripting, we can schedule the export backup at a time convenient for us. Using the following steps we can schedule an expdp backup. Step 1: Create Backup Location First, we…
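The excerpt is truncated, so as a rough sketch of the idea, a nightly export driven by cron could look like the script below. The paths, SID, credentials and schedule are placeholders rather than the values used in the original post.

#!/bin/bash
# daily_expdp.sh - sketch of a nightly schema export
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export ORACLE_SID=ORCL
export PATH=$ORACLE_HOME/bin:$PATH
DATESTAMP=$(date +%Y%m%d)
# EXP_DIR is assumed to be an existing database directory object the user can write to
expdp system/password directory=EXP_DIR dumpfile=scott_${DATESTAMP}.dmp logfile=scott_${DATESTAMP}.log schemas=scott

# crontab entry to run the script every night at 11 PM:
# 0 23 * * * /home/oracle/scripts/daily_expdp.sh > /home/oracle/scripts/daily_expdp.out 2>&1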

View On WordPress
#expdp command in oracle#expdp from oracle client#expdp include jobs#expdp job status#export and import in oracle 12c with examples#how to check impdp progress in oracle#how to export only table data in oracle#how to export oracle database using command prompt#how to export schema in oracle 11g using expdp#how to stop impdp job in oracle#impdp attach#impdp commands#oracle data pump export example#oracle data pump tutorial#oracle export command#start_job in impdp
0 notes
Text
Java with K8S using MySQL MDS
pre { background: lightgrey; font-size: 14px; border: 2px solid grey; padding: 14px; } Oracle Cloud Infrastructure (OCI) is the Oracle Cloud Platform where the tutorial leverages 3 services – namely the MySQL Database Service (MDS), Kubernetes Container (K8s) under Developer Services and Compute VM services. This article is written to provide the steps to provision MDS, K8s and Compute VM services. It also provides the steps to create a docker image from Image:java:latest as tar for local node image repository import (loading). 2 Java Client sources are used with the tutorial. Helloworld.java – it is a simple hello world output java source HelloDB.java – it is multi-threaded java application with Runnable Thread to pump Data into MySQL Database Service. It uses the Connect/J (MySQL JDBC Driver) to connect to MySQL MDS. MySQL Shell is the client tool to administrate and watch the data in MySQL Database Service(MDS). Pre-requisites : Compartment Virtual Cloud Network – VCN Internet Gateway and Default Routing Public subnet – the Public Network for the COMPUTE VM so that externally we can access to the VM (to be used in Compute subnet) Private subnet – All services (MDS, K8s) are running within the subnet Security Rule defined to allow Private Subnet and Public Subnet to communicate with regards to the port 3306 / 33060. MySQL Shell running on Public Subnet with Compute VM is able to connect to MDS on Private Subnet. COMPUTE – SSH private and public key pair Part I : MySQL Database Service – DB System Choosing from the OCI Menu : MySQL (DB Systems) Click “Create MySQL DB Systems” Fill in the DB Systems details Choose compartment, VCN/Network settings and do not forget to specify credentials. The rest of the information is easy, just follow the wizard. Click “Create” to provision MySQL DB Systems You will be able to see the DB System in the 'Creating' status. Part II Developer Service – Kubernetes Clusters Choosing from the OCI Menu : Developer Services (Kubernetes Clusters) Click “Create Cluster” and choose “Custom Create” Specify the name and Compartment and Choose K8s version (in this tutorial, v1.16.8 is chosen), and Click “Next” Specify Network settings and choose the VCN and public subnet, and click “Next” Specify the node pool settings – Name, version, Image, Shape (for tutorial purpose, choose the smallest VM Shape for the tutorial, 1 as number of nodes, choose the Availability Domain and private subnet from the VCN, and specify the SSH public key, and click “Next” Finally, review the settings and click “Create Cluster” to provision the cluster Part III Compute Service The Cloud Compute VM is used to access the MySQL Database Service and the Kubernetes Clusters. Choose OCI Menu – Compute (Instances) and Click “Create Instance” Specify the VM details – Name, Compartment and choose Image, Select Availability Domain and Choose VM Shape Specify the Network Settings – Compartment, VCN, and choose the public subnet and choose “ASSIGN a PUBLIC IP ADDRESS” Specify the boot volume detail and the SSH Public key details, and click “Create” When the Compute VM is provisioned, it is assigned with public IP address Part IV : Configure Compute VM The VM is created with default username as ‘opc’ and it is authenticated based on key authentication setting from previous section. With the SSH private/public key pair (e.g. private key file as id_rsa.txt), login to the public IP address of the Compute VM via ‘ssh’. If putty is used, the .ppk format for the private key is used. 
ssh -i ./id_rsa.txt opc@[public ip of the COMPUTE] Installing OCI – CLI : Refer to the Installation Materials https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm?tocpath=Developer%20Tools%20%7CCommand%20Line%20Interface%20(CLI)%20%7C_____1 # bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)" Configure OCI - To have the CLI setup walk you through the first-time setup process, oci setup config Specify the Tenancy’s OCID and User’s OCID – Refer to your User Profile about User’s OCID and Tenancy’s OCID. Choose the Regions and Availability Domain And specify the “Y” to generate the API Signing RSA key pair. The key pair files is generated as $HOME/.oci/.pem and $HOME/.oci/_public.pem Define the API key with the given key Select API Keys on the left menu Add Public Key (drop the file or paste the content from the _public.pem) Install Docker-engine sudo yum install docker-engine sudo systemctl enable --now docker Install kubectl and kubeadm – it is configured to access the Kubernetes Clusters sudo yum install kubectl kubeadm mkdir -p $HOME/.kube oci ce cluster create-kubeconfig --cluster-id --file $HOME/.kube/config --region --token-version 2.0.0 export KUBECONFIG=$HOME/.kube/config Installing mysql shell from MySQL Community repository Referring to the URL : https://dev.mysql.com/downloads/repo/yum/ and choose the rpm with the corresponding Linux Version (for example with Oracle Linux 7) sudo yum install https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm sudo yum install mysql-shell Once the installation has been completed, you can try simple commands for Accessing Docker : sudo docker image list sudo docker ps -a Accessing Kubernetes : kubectl get nodes kubectl get pods --all-namespaces Accessing MySQL Database Services using MySQL Shell : Using MySQL Shell to login to MDS with SQL interface mysqlsh --uri @ --sql MySQL Shell> create database if not exists test; To quit / exit of the mysql shell : MySQL Shell> q Part V – Simple Java Hello world in OCI K8s Building Docker Image for Java Application - helloworld Simple Java “helloworld” application Creating folders : mkdir -p helloworld helloworld/src helloworld/image Java Source : helloworld/HelloWorld.java public class HelloWorld{ public static void main(String[] args){ System.out.println("Hello World!!!"); } } Creating Dockerfile ( filename : helloworld/Dockerfile ) : The Docker file refers to the “image” from ‘java:latest’. It creates the structure bin and src under the working directory /root/java. The image contains the java source HelloWorld.java which prints the ‘hello world’ string on screen. It builds the image to compile the source and puts the class file to /root/java/bin. The ENTRYPOINT is to start the execution with “classpath” pointing to “bin” to execute HelloWorld.class. Dockerfile : FROM java:latest COPY src /root/java/src WORKDIR /root/java RUN mkdir bin RUN javac -d bin src/HelloWorld.java ENTRYPOINT ["java", "-cp", "bin", "HelloWorld"] Building the Docker Image for “helloworld” The image name is “my-java-helloworld” as example : cd helloworld; sudo docker build -t my-java-helloworld . Export Docker Image as Tar cd helloworld; sudo docker save my-java-helloworld > image/helloworld.tar Import Docker Image to Local Docker Repository on node Although the docker image can be imported to Docker Registry, this tutorial is simply to load the docker image to the local node repository. 
Identify the node Private IP Address from the kubectl command kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME 10.0.1.7 Ready node 46h v1.16.8 10.0.1.7 Oracle Linux Server 7.8 4.14.35-1902.304.6.el7uek.x86_64 docker://18.9.8 Copy the tar image to the node using scp and the private key (id_rsa.txt) scp -i id_rsa.txt image/helloworld.tar opc@[node’s internal-ip]:/home/opc SSH Login to the Private IP using opc and the private key (id_rsa.txt) and load the image to local repository ssh -i id_rsa.txt opc@[worker node’s private ip] With the Shell command from the K8s Worker node terminal docker image load --input /home/opc/helloworld.tar docker image list Output sample as : [opc@oke-crwgzjvgbrd-nrweyjugy4w-snmxcmvxhda-0 ~]$ docker image list REPOSITORY TAG IMAGE ID CREATED SIZE my-java-helloworld latest e6357e2cfc8d 6 days ago 643MB 742kB Creating Yaml file for helloworld (file: helloworld/helloworld.yaml) – This yaml file is to create a pod named as “myhelloworld” based on the image from imported “my-java-helloworld’. The setting with “imagePullPolicy as Never” is to allow the image to used locally from the local node repository. The setting “restartPolicy:OnFailure” is to set it as “execute once” instead of a daemon. apiVersion: v1 kind: Pod metadata: name: myhelloworld spec: containers: - name: myhelloworld image: my-java-helloworld imagePullPolicy: Never restartPolicy: OnFailure Creating a namespace ‘demo’ and Apply the yaml file kubectl create namespace demo01 kubectl apply -f helloworld.yaml --namespace demo01 kubectl get pods --namespace demo01 kubectl logs myhelloworld --namespace demo01 output sample : [opc@ivanma-demo1 buildJavaImage]$ kubectl create namespace demo01 namespace/demo01 created [opc@ivanma-demo1 buildJavaImage]$ kubectl apply -f helloworld.yaml --namespace demo01 pod/myhelloworld created [opc@ivanma-demo1 buildJavaImage]$ kubectl get pods --namespace demo01 NAME READY STATUS RESTARTS AGE myhelloworld 0/1 Completed 0 4s [opc@ivanma-demo1 buildJavaImage]$ kubectl logs myhelloworld --namespace demo01 Hello World!!! Part VI : Java DB application with MDS Building Docker Image for Java DB Application - HelloDB Simple Java “HelloDB” application Create folders : mkdir -p hellodb hellodb/src hellodb/image Download ConnectorJ for MySQL and copy the tar file to folder “src” Referring to URL : https://dev.mysql.com/downloads/connector/j/ Choose the “Platform Independent” and download the tar Or as of ver 8.0.21, the following command to “wget” the file directly cd /home/opc/hellodb; wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.21.tar.gz tar -xvf mysql-connector-java-8.0.21.tar.gz mv mysql-connector-java-8.0.21/mysql-connector-java-8.0.21.jar src Java Source : hellodb/HelloDB.java - Download the file from Github Change the following private variables with the MDS IP address details and the corresponding username and password accordingly. private static String database = "test"; private static String baseUrl = "jdbc:mysql://" + "address=(protocol=tcp)(type=master)(host=)(port=3306)/" + database + "?verifyServerCertificate=false&useSSL=true&" + "loadBalanceConnectionGroup=first&loadBalanceEnableJMX=true"; private static String user = ""; private static String password = ""; Creating Dockerfile ( filename : hellodb/Dockerfile ) : The Docker file refers to the “image” from ‘java:latest’. It creates the structure bin and src under the working directory /root/java. 
The docker image contains the java source HelloDB.java; it does the following It creates tables test.mytable (f1 int auto_increment not null primary key, f2 varchar(200)) engine=innodb;") Starting 10 threads Each thread executes a batch size of 10 to insert data to test.mytable for 10 iterations. The Dockerfile builds the image to compile the source. The Dockerfile copies the class files to /root/java/bin including the connector-j tar file from the src folder The ENTRYPOINT is to start the execution with “classpath” pointing to “bin” and the “connect-j tar” to execute HelloDB.class. Dockerfile : FROM java:latest COPY src /root/java/src WORKDIR /root/java RUN mkdir bin COPY src/mysql-connector-java-8.0.21.jar bin RUN javac -d bin src/HelloDB.java ENTRYPOINT ["java", "-cp", "bin:bin/mysql-connector-java-8.0.21.jar", "HelloDB"] Building the Docker Image for “my-hellodb” The image name is “my-hellodb” as example : cd /home/opc/hellodb; sudo docker build -t my-hellodb . Export Docker Image as Tar cd /home/opc/hellodb; sudo docker save my-hellodb > image/hellodb.tar Import Docker Image to Local Docker Repository on node Although the docker image can be imported to Docker Registry, this tutorial is simply to load the docker image to the local node repository. Identify the node Private IP Address from the kubectl command # kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME 10.0.1.7 Ready node 46h v1.16.8 10.0.1.7 Oracle Linux Server 7.8 4.14.35-1902.304.6.el7uek.x86_64 docker://18.9.8 Copy the tar image to the node using scp and the private key (id_rsa.txt) scp -i id_rsa.txt image/hellodb.tar opc@[node’s internal-ip]:/home/opc SSH Login to the Private IP using opc and the private key (id_rsa.txt) and load the image to local repository ssh -i id_rsa.txt opc@[worker node’s private ip] With the Shell command on the K8s worker node's Terminal docker image load --input /home/opc/hellodb.tar docker image list Output sample as : [opc@oke-crwgzjvgbrd-nrweyjugy4w-snmxcmvxhda-0 ~]$ docker image list REPOSITORY TAG IMAGE ID CREATED SIZE my-hellodb latest 53b4b1c08d64 32 minutes ago 648MB my-java-helloworld latest e6357e2cfc8d 6 days ago 643MB Creating Yaml file for hellodb (file: hellodb/hellodb.yaml) – This yaml file is to create a pod named as “my-hellodb” based on the image from imported “my-hellodb. The setting with “imagePullPolicy as Never” is to allow the image to used locally from the local node repository. The setting “restartPolicy:OnFailure” is to set it as “execute once” instead of a daemon. apiVersion:v1 kind: Pod metadata: name: my-hellodb spec: containers: - name: my-hellodb image: my-hellodb imagePullPolicy: Never restartPolicy: OnFailure Creating a namespace ‘demo02’ and Apply the yaml file kubectl create namespace demo02 kubectl apply -f hellodb.yaml --namespace demo02 kubectl get pods --namespace demo02 kubectl logs my-hellodb --namespace demo02 output sample : [opc@ivanma-demo1 hellodb]$ kubectl logs my-hellodb --namespace demo02 Thread ID(0) - Iteration(0/10) Thread ID(1) - Iteration(0/10) Thread ID(2) - Iteration(0/10) Thread ID(3) - Iteration(0/10) Thread ID(4) - Iteration(0/10) Thread ID(5) - Iteration(0/10) Thread ID(6) - Iteration(0/10) Thread ID(7) - Iteration(0/10) Thread ID(8) - Iteration(0/10) Spawned threads : 10 Thread ID(9) - Iteration(0/10) ……. 
Thread ID(6) - Iteration(9/10) Thread ID(4) - Iteration(9/10) Finished - 57332 Open another Terminal to the Compute VM from your PC ssh -i [id_rsa.txt] opc@[Compute’s public IP] mysqlsh --uri [MDS user]:[MDS password]@[MDS IP] MySQL Shell > watch query select count(*) from test.mytable While the MySQL Shell terminal is showing the count(*), on the separate terminal, re-apply the yaml to see the count(*) changes. You can see the count(*) growing from 1000 to 2000. (10 threads inserts 10 iterations x 10 rows in a batch which is 1000 rows) kubectl delete pod my-hellodb --namespace demo02 kubectl apply -f hellodb.yaml --namespace demo02 Finally, Clean up namespace “demo01” and “demo02”. # kubectl delete namespace demo01 # kubectl delete namespace demo02 That is the end of this "MDS + Java + K8S" tutorial. https://blogs.oracle.com/mysql/java-with-k8s-using-mysql-mds
0 notes